MS BackOffice Unleashed




— 3


Security Environment


The previous chapter covered the basics of the Windows NT and BackOffice architectures. This and the next several chapters explore some of the major portions of this architecture in more detail. This chapter is devoted to the security environment. Some of you may not be interested in security at all. Perhaps you think that information should be accessible to everyone or something like that. Others of you may live and breathe security. It may be the most important decision when designing new systems in your super-classified environment. Whatever your philosophy about security, there is at least some minimal level of security implemented in the most basic BackOffice installations, so you need to know about this topic to make your system work.

This chapter is somewhat of a balancing act: enough information to understand the security environment, but not an excessive amount of detail about the internal mechanisms and so forth.

General Computer Security Concepts


There are entire books on computer security out there and this is not one of them. Instead, this is a reasonable introduction to make sure that you are ready when you look into the implementation of security in BackOffice. The following general security concepts are important for the discussions in the rest of this chapter:

The first layer of computer security that you will typically deal with is the system access layer. Figure 3.1 shows how the various security concepts fit together. You could say that if people who should not have access to a system cannot get into it, you have a relatively safe system. Typically, this form of security is implemented with a user ID and password combination that gives some assurance that the person requesting access to the system really is someone who is entitled to that access.

FIGURE 3.1. Interaction of various security concepts.

Another major component in providing access security is limiting who has access to the system and from where they have this access. Both of these concepts can really help increase your overall security. Perhaps you are one of those poor administrators who has to enable access to your server from anyone connected to your internal network, the Internet, and a large dial-in modem pool. You have no way to limit access and will have to rely on some of the other concepts discussed later in this chapter. However, if you do have the ability to control access, you may want to limit the users who have access to your system (that is, not give out user IDs to people who are unlikely to use the system). Access to the computer system is what makes it actually useful, of course, so perhaps instead you may just want to limit where users can access your computer from. For example, you might permit users to transfer information to and from disk drives only when they are connected from certain workstations in public areas, where they would be less comfortable doing mass downloads of corporate data or planting a virus.

Let's spend a few moments on the user IDs and passwords that will be used to control access. User IDs control very little. They get sent with electronic mail messages, and most organizations have standards that enable you to determine a user ID just by knowing the person's name (for example, jgreene). The password, then, is the place where most of the security is enforced. In the early days, people could enter anything (or nothing) for their passwords. Some places allow very simple patterns (such as 12345), and others issue policies requiring relatively complex passwords. An example of a relatively complex password is one that is assigned by a random generator that links two short words together (for example, treedogs). Modern operating systems enable you to enforce policies that keep users from choosing relatively simple passwords (or none at all). A nice feature of Windows NT is that the administrator can choose to enforce the password security policies or let the users do whatever they want, as is appropriate for a given environment.
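To make the idea of an enforced password policy concrete, here is a minimal sketch in Python. The specific rules (a minimum length, no all-digit passwords) and the function name are invented for illustration; they are not the actual checks that Windows NT performs.

```python
import re

def check_password(password, min_length=6):
    """Reject passwords that violate a simple complexity policy.

    These rules are illustrative only: a minimum length, no empty
    passwords, no all-digit patterns such as 12345, and at least
    one letter somewhere in the password.
    """
    if len(password) < min_length:
        return False              # catches "" and short patterns
    if password.isdigit():
        return False              # e.g. "998877221"
    if not re.search(r"[a-zA-Z]", password):
        return False              # must contain at least one letter
    return True

print(check_password(""))          # no password at all
print(check_password("12345"))     # trivial pattern
print(check_password("treedogs"))  # two random words linked together
```

A real policy engine would also track password age and history; the point here is simply that the operating system, not the user, gets the final word on what counts as an acceptable password.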

Before you leave the topic of system access, you should know about a few things that are coming on the horizon (or are already here for some frontline organizations). User IDs and passwords can be compromised or hacked. There is no way a system can provide security if users write their passwords on sticky notes and attach them to their computers. Some places need more security than user IDs and passwords can provide, and they are looking to advanced technologies for help. Among the more promising technologies for the near term are access cards and voice recognition. Access cards have been around for some time, but they are reaching the point where they are moving out of the deeply classified world and into the "real" world. The basic concept is that a card reader attached to your computer either sends an identification code (indicating the computer is being used by a user with a valid key card) or actually scrambles the signals for transmission and descrambling at the other end. Voice recognition is another interesting technology that is starting to be supported in the standard Windows API set. It could be used to have you speak your name and have your voice pattern matched against a master pattern stored when your account was created. I have a package that I played with that uses speech recognition to navigate through the Windows menu system (for example, saying "Excel" launches the Excel spreadsheet application). There is no shortage of fun technologies being worked on today that will probably be standard equipment in the not too distant future.

The next general security topic is resource access. The term "resource" is used to describe anything on a computer that a user would want to access. Examples of resources include disk directories, disk files, printers, and even applications that are running. You start thinking about limiting access to resources when you have a number of different groups accessing a single computer. Taking a BackOffice example, you would want to enable BackOffice and Windows NT system administrators to access various administrative and monitoring programs that you would prefer to restrict from the regular system users. You might also want to restrict sensitive personnel information (such as performance appraisal results and salaries) from all users except those in the personnel department and management.

The first task in the process of restricting access is identifying the various resources on a computer system. You could, for example, make each file a separate resource and go through a process of granting, on a user-by-user basis, access to that file. This would consume quite a bit of time. Another system would have you group files into directories and then grant access to the directories based on access needs. Still other schemes might function on a whole disk or multiple disk basis.

The next task would be to actually perform the access grants. This also could be quite a time-consuming process on large systems. Most security schemes give permissions to users based on their jobs or the group in which they function (that is, payroll clerks can access financial information, and human resources types can access the personnel system data). Most operating systems mimic this real-world organization by enabling system administrators to define groups of users and then associate individual users with the appropriate groups. They can then grant privileges to groups. You can still grant privileges to individual users, but you would do this only if you absolutely had to.
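The group-based grant scheme just described can be sketched in a few lines of Python. The group names, users, and resources here are invented for the example; a real operating system keeps this information in its security database.

```python
# Users belong to groups that mirror the real-world organization.
groups = {
    "payroll":   {"alice", "bob"},
    "personnel": {"carol"},
}

# Privileges are granted to groups, not to individual users.
grants = {
    "financial_data": {"payroll"},
    "personnel_data": {"personnel"},
}

def has_access(user, resource):
    """A user has access if any group containing that user has been
    granted access to the resource."""
    allowed_groups = grants.get(resource, set())
    return any(user in groups[g] for g in allowed_groups)

print(has_access("alice", "financial_data"))  # True: alice is in payroll
print(has_access("alice", "personnel_data"))  # False: wrong group
```

Notice the administrative saving: hiring a new payroll clerk means adding one name to the payroll group, not touching every resource grant on the system.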

The third concept in relationship to security is the client-server model for computer system access. Figure 3.2 contrasts the client-server and host terminal methodologies. The host terminal access method matches the concepts discussed previously for system access. You identify yourself to the system and then go to either a command prompt or a graphical user interface to perform some processing. However, many modern computer environments are designed to enable you to be logged on to one computer yet access resources that are located on another computer. These resources include databases, shared directories, and even remote printers.

FIGURE 3.2. Client-server and host terminal access models.

This presents a few additional challenges for a security system. First, because you have not formally logged on to the host computer system, you need to be authenticated on a request-by-request basis. There are a couple of ways that the server can approach validating your access rights. First, it could simply ask you to supply your user ID and password each time you want some information from it. Although this would technically work, it would be an extreme burden on users who frequently access resources on other computers. A second approach asks you to identify yourself the first time you try to access that host computer and then permits you to continue to access its resources until you log off the system. The final basic approach enables you to log on to the entire network of computers once and then get access to any resources to which you are entitled, based on a security token that you are given upon that first login.
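A minimal sketch of the third approach, the security token, might look like the following Python. The function names and the session table are illustrative assumptions, not the actual Windows NT token mechanism.

```python
import secrets

sessions = {}  # token -> user ID, kept by the authenticating server

def issue_token(user, password, accounts):
    """Validate credentials once, at network logon, and hand back a
    random token that stands in for the password afterward."""
    if accounts.get(user) != password:
        return None
    token = secrets.token_hex(16)
    sessions[token] = user
    return token

def serve_request(token, resource):
    """Every later request carries the token instead of credentials."""
    user = sessions.get(token)
    if user is None:
        return "access denied"
    return f"{resource} served to {user}"

accounts = {"jgreene": "treedogs"}
tok = issue_token("jgreene", "treedogs", accounts)
print(serve_request(tok, "shared directory"))
print(serve_request("bogus-token", "shared directory"))  # access denied
```

The user types a password exactly once; after that, possession of the token is what proves identity, which is what makes single network logon practical.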

There is one complication to this client-server world (yes, there are almost always complications). Some resources that you wish to access are controlled by the operating system itself (shared directories or printers, for example). Other resources are under the control of applications running on the server that implement their own security schemes. A classic example is an Oracle database management system running on your server. To provide additional speed, the communications protocols have been set up to route signals for the database directly from the TCP/IP communications processes to the Oracle utilities. There is no operating system security involved in this process (you could, of course, limit the TCP/IP transmissions or cut off signals to the database). This design originated on operating systems that had little security and a lot of overhead. Today, however, it means that you may have to implement security within your application packages if they are connected directly to the communications services of the operating system. Note that BackOffice components are designed to utilize the security mechanisms of the operating system, which is good news for poor overworked administrators.

Application security is the next security concept, and it fits in nicely with the last discussion on the client-server access model. I know a lot of developers who have been implementing security within their applications for so many years that they automatically do it when they build a new application. That worked well in the client-server model on operating systems where security varied in quality and was not usually that good. Today, however, you have large groups of programmers making operating systems secure. They have more time to build in security, and security implemented at the operating system level is usually faster.

There are reasons to consider using application security instead of, or in addition to, operating system security. If your application is to run on a large number of platforms and you want to provide a consistent administration interface, application security is probably the way to go. There is a price, however. When you use application security, someone has to learn your rules for granting privileges, go into the administrative programs you write, and maintain user accounts there. It is easiest on those who have to do the administration if you can use the operating system security mechanisms to validate who the user requesting the information is and then determine the appropriate access rights.

Now for a truly disgusting topic. Yes, computer viruses annoy me no end. Computers can be used for so many good purposes. They can make your work easier and get you home a little earlier each day (unless you have one of those bosses who expands the workload to ensure that he gets a 10-hour day out of everyone no matter what!). Some computers run critical applications that can affect the safety of personnel. Yet there are some really sick people out there who get great joy out of writing software programs that harass or damage other people's computers. Worse yet, the virus problem forces information system types to spend time looking out for and combating viruses. Companies spend money on virus software and then lack the money to upgrade all those old 80286 computers sitting on people's desks.

Viruses are a reality and you have to think about protecting your machines from programs that come in and destroy data for the fun of it just as much as you have to protect that data from corruption by a disgruntled employee. Security mechanisms can be quite useful in preventing viruses. Most viruses are designed to access special operating system functions to do their damage (for example, destroy tables containing the list of files on a disk drive). If the operating system is enforcing tight security, viruses have fewer holes to get through. The account auditing tools can also detect common signs of viruses and warn the administrator of suspicious activities.

Hacking is a topic closely related to viruses. For the purposes of this discussion, hackers are described as people trying to get access to resources that they are not entitled to access. Just as there are folks out there working into the wee hours of the morning writing computer viruses, there are those folks who spend their nights trying to hack into other people’s computers. Some have made it a symbol of a computer counterculture to say what computers they have been able to break into. If it were just bragging rights, it might be tolerable. However, it often goes way beyond access for the sake of access. Imagine the value of your critical business plans or even source code for your new product release. Suppose someone could get into your computer and provide this information to competitors. Of course, while in your computer, the hacker could also wipe out your hard disks or any number of other annoying tactics.

There are a number of security mechanisms that can come into play to minimize the risk of hackers. First, the login mechanisms can be set to disable accounts that have had too many failed logins in a row (a sign of a hacker trying to guess passwords by trying the user ID, 12345, and so forth). Another useful tool is to have the auditing mechanisms record data about failed logins so that the administrator can detect patterns of suspicious activity. If the security system requires users to pick nontrivial passwords and change them on a routine basis, that can really hurt the hacker’s efforts.
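The lockout-after-failed-logins mechanism can be sketched as follows. The threshold of three attempts and the data structures are illustrative choices, not the actual Windows NT implementation.

```python
MAX_FAILURES = 3
failed_attempts = {}   # user ID -> consecutive failed logins
disabled = set()       # accounts locked out by the policy
audit_log = []         # failed logins recorded for the administrator

def try_login(user, password, accounts):
    """Check credentials, lock the account after too many failures in
    a row, and log every failure for later pattern analysis."""
    if user in disabled:
        return False
    if accounts.get(user) == password:
        failed_attempts[user] = 0      # success resets the counter
        return True
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    audit_log.append(("failed login", user))
    if failed_attempts[user] >= MAX_FAILURES:
        disabled.add(user)             # too many in a row: lock it
    return False

accounts = {"jgreene": "treedogs"}
for guess in ["jgreene", "12345", "password"]:   # a hacker's guesses
    try_login("jgreene", guess, accounts)
print("jgreene" in disabled)                        # account is locked
print(try_login("jgreene", "treedogs", accounts))   # even the real
                                                    # password fails now
```

The audit log is just as important as the lockout itself: a burst of failed logins across many accounts is the pattern the administrator wants to spot.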

With an apology to the security experts in the audience who may have found this discussion too basic, you should now appreciate the basics of computer security. This prepares you for the next section, where you consider some alternative security schemes, which in turn sets the stage for the section in which you explore the security scheme implemented in Windows NT and BackOffice.

Alternative Security Architectures


Why discuss alternative security architectures when all you really need to know is the one that is implemented for BackOffice? My goal is to give you an appreciation of the alternatives that the Microsoft team had to choose from when they implemented NT and BackOffice. This will also enable you to appreciate the relative strengths and implications of the strategy Microsoft has chosen. First, you get a quick overview, with a list of pros and cons for each strategy, that leads into the BackOffice-specific discussions.

A Bit of History


In ancient times (or actually not all that many years ago), computer applications consisted of card decks that were processed by card readers and enormous computer complexes that were only a fraction as powerful as the computers that most grade school students play with. The only resource that you accessed was the computer itself. The security scheme consisted of a job control card that identified you as a user and maybe even required a password. Actually, I believe the job control card was less a security mechanism than a means to ensure that the data center could properly bill the users for services (it cost a fortune back then). Your application had to contain all the software and data that your program was going to work with. Although this allowed a very simple security scheme, it made application programming a tedious task (imagine having a punch card for every transaction that occurred to your general ledger so that you could run a simple report).

Computer types of that era recognized that this was not a very productive environment (and hated to see adults crying when they dropped their decks of computer cards and watched them scatter over the floor). Storage technologies were soon developed that enabled users to store large (for the time) data sets on magnetic disk drives. This freed users from needing to store all their data in card decks but created the operating system problem of having to control access to information. You would not, for example, want university students to have access to the databases that contained their grades (talk about grade inflation).

One of the early schemes involved creating levels of users (somewhat analogous to groups, but with a hierarchy that enabled people at the top to access anything). You could access information based on your level of access (administrators could get everything and undergraduate users could not even access the disk storage systems). This was a good start, but then you got into conflicts when the scheduling people did not want the professors to see their information and everyone became concerned about personnel and financial information access. The good news was that as financial information started to be stored on computers, money started to become available to fund development (they were real systems and not toys for the engineers).

Computers continued to evolve, and they started to get more resources attached to them. In addition to disk drives, there were printers, plotters, and tape drives. In this era, a machine that was still much dumber than today's common PC cost millions of dollars, so I always sensed that money was more of a concern in most of the places I worked than any philosophical concern about controlling access to information. There were prehistoric hackers back then who managed to figure out ways around the security system and store files that were too large for their own disk packs on someone else's disk packs. However, for the most part, the basic login ID, password, and resource access privilege scheme was in place and functioning as a security system.

This worked fine for most university and business organizations. They got away with limiting who had access to the system. Back then, only a few people were enabled to run jobs, and they were supposedly trusted individuals. The masses were limited to reading printouts prepared by the data center. However, this was not good enough for many government agencies around the world. The military and intelligence communities needed to store and process large volumes of information. The computer was the only tool that could manage this information overload. When large numbers of people started to need access to data, it became important to closely control who had access to individual files as opposed to merely controlling access to the computer itself. Other issues, such as auditing to determine who actually accessed information in the past, also became important.

These needs led governments to pour a lot of money into research on computer operating systems, network architectures, and security systems. This work was split between universities, laboratories, and private companies who tried a number of different concepts to see what would work best. It started as a series of add-on packages that you could install on your computer to increase its security level. The bad news was that these add-on packages often disabled a number of your applications, which were performing basic operations that the security package deemed to be illegal. So you wound up rewriting applications and going through other changes. Many organizations also changed security systems one or more times as the vendors of the products they started with went out of business, or as those products evolved into new schemes that could do more or that used fewer system resources to provide security.

Part of this funding also helped to develop more advanced computer networks to link computers. This enabled users to access multiple computers, which contained different bits of information that were needed to get their work done. Peer-to-peer networks (in which all computers are treated as equals regardless of their speed ratings or amount of disk storage) and even the beginnings of client-server computing came into being. The bad news was that these networking alternatives created new security problems that had to be solved as you tried to control access to shared disk drive information without the advantage of a controlled login on a controlled terminal network. Some of the problems with network security are still being worked on today, although most operating systems can now handle this world well, if not with perfection.

Another interesting development was the newer operating systems that came into being. DEC's VMS operating system was probably one of the first commercial operating systems written after the days of card decks. It had a lot of terminal and resource security built directly into the operating system itself, as opposed to security being a separate, add-on package. It also was one of the first major operating systems to assimilate the networking technologies that were evolving from some of the research mentioned earlier. UNIX came into its own at about the same time, supporting minimal security; although networking was a later add-on, the highly modular nature of UNIX enabled it to be assimilated quickly. UNIX never really picked up heavily on the security theme until the government started to use it, but there are now some UNIX systems with relatively high security ratings (although this often required a complete rewrite of the UNIX kernel and other major components to make the result look like traditional UNIX but have security built into its heart).

Finally, there are operating systems such as Novell Netware, IBM OS/2, and Windows NT that have come about in the modern era of networked computers, intelligent workstations, and nonbatch operating systems. They have the advantage of being built from the ground up for this new world. They have also been optimized for distributed processing, where you have a large number of smaller computers as opposed to one giant computer with a number of clients. From a security point of view, these operating systems have the advantage of being built during an era where the standard level of security was much higher than when IBM started building mainframes. Therefore, the designers put more security features into the basic architecture, which usually results in a higher level of security and less overhead.

Security Models


With that brief discussion of history aside, let's look at the more common security models. Figure 3.3 illustrates these basic alternatives. One important point to keep in mind is that each of these alternatives has advantages and disadvantages. For example, if every computer had a security system that was complex and secure enough for the CIA or NSA, the only person who would be able to log on to your home computer would probably be one of your kids (they are so much better at computers than many adults that I know). That would not be very much use and is just not needed for most home PCs.

FIGURE 3.3. Alternative security models.

The first model is simple access, which is used in the typical PC environment. You have a user ID and password (maybe) to access the system. Once you are at the command prompt (in the DOS case) or the graphical user interface (Windows), you have full control over everything on the system. Unless someone has encrypted files or used some other application-level security features, you have full access to all data and other resources. This model is the simplest to deal with when you have a truly "personal" computer that one person uses with full access to all its resources.

The next model is resource password security, in which you assign a password to each resource. For example, you have a shared accounting data directory that you put the password of IRS on and a shared engineering data directory that you put the password of EINSTEIN on. This is also fairly simple to initially configure and, assuming that you do not choose dreadfully simple passwords such as the ones in the example, it can work fairly well. The problems with this system typically occur when individuals leave the group for which the data was created (or get fired). In this case, you may want to change the password to ensure that the data is not viewed by these former group members. If your organization has a reasonable turnover rate, this can cause a lot of headaches because you have to keep changing the passwords, and your users have to keep learning the new ones.
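A few lines of Python can illustrate the resource password model and the headache that comes with changing a password. The share names and passwords echo the example above; everything else here is invented for illustration.

```python
# Each shared resource gets its own password; anyone who knows the
# password gets in. There are no user accounts at all in this model.
resource_passwords = {
    "accounting_share":  "IRS",
    "engineering_share": "EINSTEIN",
}

def open_share(resource, password):
    """Grant access if the supplied password matches the resource's."""
    return resource_passwords.get(resource) == password

print(open_share("accounting_share", "IRS"))   # True: password known

# When someone leaves the group, the only remedy is to change the
# password and tell every remaining user the new one:
resource_passwords["accounting_share"] = "AUDIT96"
print(open_share("accounting_share", "IRS"))   # False: old password dead
```

Because there is no notion of individual users, revoking one person's access means inconveniencing everyone else, which is exactly the turnover headache described above.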

The third model is user access security. In this model, you identify yourself as a user to the computer that has the resource that you want by entering your user ID and password as it is recorded on that computer. The system administrator for that particular computer has created an account for you. That administrator then assigns access rights to your account that determine what you have access to. If you leave the company or your job changes, all the administrator has to do is either cancel your account or change your resource access rights.

An extension of the user access security scheme is the group access security scheme. Because most access rights are given to a number of people who work in the same group or even have the same job, it is logical to teach the computer about these groups and assign access rights to the group as a whole. Then all the administrator has to do is create a user account, then assign it to the appropriate group or groups. The administrator can always assign unique rights to an individual if there is something specific about this individual—for example, the person is the database administrator in the development group and therefore needs access not only to the software directories, but also needs special privileges to access the controlling utilities for the database management system.

The schemes proposed so far work for individual computers or even small groups of computers. However, in the world of distributed computer resources where you replace a single large computer with many smaller servers, it could be a nightmare to perform administration on dozens or even hundreds of computers. The basic goal therefore would be to provide a single point for user administration and have all computers access this security information. Microsoft refers to this arrangement as domain access security. There are other vendors who implement the same general concept. You effectively have a login ID, password, and group accesses that are recognized by all computers on the network. The administrator has only to adjust your group structure in one location and you acquire the appropriate rights on any computer that is participating in the domain.
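The single-point-of-administration idea can be sketched as follows: every member server defers its security checks to one shared database. The class and method names are invented for illustration and bear no relation to the actual Windows NT domain controller protocol.

```python
class DomainController:
    """One central store of accounts and group memberships."""
    def __init__(self):
        self.accounts = {}   # user -> password
        self.groups = {}     # user -> set of group names

    def add_user(self, user, password, groups):
        self.accounts[user] = password
        self.groups[user] = set(groups)

    def authenticate(self, user, password):
        return self.accounts.get(user) == password

class MemberServer:
    """Any server in the domain defers security checks to the
    controller instead of keeping its own user database."""
    def __init__(self, controller):
        self.controller = controller

    def grant_access(self, user, password, required_group):
        if not self.controller.authenticate(user, password):
            return False
        return required_group in self.controller.groups.get(user, set())

dc = DomainController()
dc.add_user("jgreene", "treedogs", ["admins"])

# Two different servers, one shared security database.
server_a = MemberServer(dc)
server_b = MemberServer(dc)
print(server_a.grant_access("jgreene", "treedogs", "admins"))   # True
print(server_b.grant_access("jgreene", "treedogs", "payroll"))  # False
```

Because both servers consult the same controller, a single account change is immediately visible everywhere, which is exactly the administrative saving the domain model provides.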

The Windows NT Security Architecture


Recall that Windows NT supports a domain security model. However, you can configure your systems into what Microsoft refers to as a workgroup configuration. This model is basically the group access security scheme discussed in the last section. The workgroup concept came along first. It started about the time Microsoft extended the basic Windows 3.1 operating system to provide native support for computer networking with the Windows for Workgroups product. Before this point, most networking was done through add-on software drivers that enabled the computers to use network cards to access data on servers (most of which were running Novell Netware).

There were a few problems with the older operating systems providing network support. The Windows 3.1 operating system could support additional memory above the original 640K limit, but it was not easy. Some software components could be loaded into what was referred to as upper memory (actually there were several regions), but others insisted on being loaded into that precious first 640K. This became a problem because users often had to load network drivers into the lower memory region, which meant they could not run certain applications that demanded a lot of memory there.

Workgroups


Microsoft improved the memory management features of the operating system and polished off all of the hooks needed to ensure that the operating system and the network linked together well. It started with a simple peer-to-peer model because it did not have a strong presence in the server operating system market at that time. It appealed to small groups of users who wanted to share Bill’s printer with everyone and let all the word processors use the large disk drive in Tom’s machine to store large files. It was very simple to administer and ran on the resource password security model—Tom could put a password on his disk drive that users had to enter in order to get access to it or he could just leave it without a password.

With Windows NT Server, Microsoft had a product that could seriously compete in the server market. Unlike the Windows for Workgroups product, it also had an operating system with reasonable security and therefore could use more sophisticated security schemes. Therefore, Microsoft extended the workgroup concept to enable users to have login IDs and passwords on the NT servers. If you could supply the correct login ID and password, you were then given access to resources based on the permission grants made by the system administrator for the server. The administrator had the choice of making the resource access grants based on login IDs or on group memberships.

Domains


This workgroup scheme was good enough to meet the needs of almost all computer networks of the day. However, things were growing rapidly in the area of PC servers. Larger numbers of users were accessing larger numbers of servers. To accommodate these needs, Microsoft implemented its domain architecture. Figure 3.4 shows the differences between domains and workgroups. The domain is a concept that applies to both workstations and servers. The key is that you have to have at least one Windows NT server to have a domain.

FIGURE 3.4. Workgroups versus domains.

The central security focus in a domain is a Windows NT Server that has been configured to run as a domain controller. You have to be careful when setting up your servers because you cannot switch a server that was configured to run as a workgroup server to run as a domain server. You basically get the pleasure of reinstalling the operating system and rebuilding all your configuration and access data. It is not an easy task, so if you think that you might want to move into the domain architecture, it is best to start out that way before you get too much invested in your current configuration.



If you might want to move to a domain architecture, it is best to configure your network as a domain from the start.

Another point to consider in the planning process: because the domain controller is the key to all security access information within your network, what can you do when it runs into a hardware problem and is shut down? Microsoft has considered this and devised the concept of primary and backup domain controllers. You have one primary domain controller that is the ultimate authority on who has what security access rights. You have the option of implementing one or more backup domain controllers. These computers actually serve two functions. First, they can take over and validate security access information when the primary domain controller is not available. Second, they can also service security information requests from users, thereby reducing the load on the primary domain controller.
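The failover idea behind backup domain controllers can be sketched in a few lines. This is a hedged illustration only: the class, the `logon` helper, and the account names are invented for the example, and real NT controllers speak the Netlogon protocol rather than anything like this.

```python
# Illustrative model of primary/backup domain controller failover.
# All names and the validate() logic are assumptions for this sketch,
# not the actual Windows NT mechanism.

class DomainController:
    def __init__(self, name, accounts):
        self.name = name
        self.online = True
        self.accounts = dict(accounts)   # user ID -> password (replicated copy)

    def validate(self, user, password):
        return self.accounts.get(user) == password

def logon(user, password, controllers):
    """Try each controller in turn; any available one can validate a logon."""
    for dc in controllers:
        if dc.online:
            return dc.name, dc.validate(user, password)
    raise RuntimeError("no domain controller available")

accounts = {"tom": "secret"}
pdc = DomainController("PDC", accounts)
bdc = DomainController("BDC1", accounts)   # holds a replicated security database

pdc.online = False                         # primary suffers a hardware failure
print(logon("tom", "secret", [pdc, bdc]))  # ('BDC1', True)
```

The point of the sketch is simply that a logon succeeds as long as any controller with a copy of the security database is reachable.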

You build a domain network by installing the Windows NT Server operating system on one of your computers with the primary domain controller option specified. (You learn more about installing the Windows NT Server operating system in Chapter 11, "Windows NT Administration.") Once you have this primary domain controller configured, you can install backup domain controllers. You install the Windows NT Server operating system on these machines and select the backup domain controller option. You will be prompted for the name of the domain that this backup domain controller is to join. You will need to supply a valid domain administrator user ID and password as registered on the primary domain controller. Once you are validated as a domain administrator, the two servers exchange security databases and encrypted security keys with one another. This is all transparent to you at this point. All you see is that once you have completed the server installation process, you can run the administrative tools on the new backup domain controller and see all the same security information that you could see on the primary domain controller.



When you perform security administration tasks on a domain, you are making changes that are replicated on all domain controllers within that domain. You do not have to update the other domain controllers manually. If one of the controllers is down while you are making the administrative changes, the other controllers will provide any updates that have occurred during the downtime to the restarted controller when it starts up.

Imagine that you could write a program that would send out requests to the domain controller and record them, and then you could look through the data that is sent back to figure out how Microsoft encodes all that security access information. That would enable you to figure out, using your user ID, how to hack into other, more privileged accounts. The folks at Microsoft thought of this. Basically, before your computer gets any real security information sent to it, you need to have it join the domain. You join a domain by setting up the network configuration showing you as a member of the domain. To create an account for your computer in that domain, you need to have someone with domain administrator privileges log on to the system. This also assures that you are running a trustworthy operating system (such as Windows NT) that isolates security information from user processes.

Your next question might be which is better—domains or workgroups? I started with workgroups and became quite comfortable with them. They were what I was used to from the Windows for Workgroups world, and being in a development environment, I hated the idea of having to keep the servers up to ensure that security information was available. However, one fine day when I was first installing some remote administration tools, I found out that these tools worked only when you were using the domain architecture. That made sense because you need to have a good handle on who is trying to modify user security information, and the secure protocols used in the domain world were probably a good idea.

About a month later, when I tried to install SMS and Exchange Server, I got similar error messages saying that I had to be in a domain in order to install those products. I guess that there is very little that can be done with simple workgroup configurations to provide the kind of security that many users are now demanding for their business applications. From these experiences, I got the basic message: many of the future business applications that I run across under Windows NT will be using the advanced security features of domains.

It was quite maddening at first. I tried time and again to get my workgroup server upgraded to a domain server. Finally, I bought a couple of reference books and found that this was not possible. I had to reinstall my server. This also meant that I had to reinstall my Oracle database and a few other things that I wanted on that server. However, once I did this I was surprised to find out how easy it was to administer a domain. I had a little learning to do with regard to adding computers to the domain and a few other areas (which is presented in Chapter 11 and also in Chapter 12, "Windows NT Performance Tuning"). All in all though, I have actually come to prefer domains, because I have one set of user IDs and passwords for all my servers. I then only have to go to the individual servers to set up access rights to resources (and I almost always grant resource privileges by group).

The following are the advantages of working with domains:

- One set of user IDs and passwords serves all the servers in the domain.
- Security administration changes are replicated to all domain controllers automatically.
- Remote administration tools and products such as SMS and Exchange Server require the domain architecture.

The following are the advantages of working with workgroups:

- They are very simple to set up and administer.
- They do not require a dedicated Windows NT Server computer to act as a domain controller.
- They are a good fit for small groups that just want to share printers and disk drives.


Domain Trust Relationships


Why stop here with the concept of domains? With the way computer networks continue to expand, it is not too difficult to imagine wanting access to every computer in your organization that you legitimately need to reach. One solution might be to create really large domains that cover all locations and groups. Although this would be theoretically possible, there are some practical concerns that must be dealt with. The tight integration of domain controllers needed to enable backups to take over and also share some of the load with the primary controller can cause problems when you have a large number of controllers spread out over a large area. This is compounded by the fact that almost all wide area networks have much less transmission capacity than local area networks. You would not typically want to bog down the WAN with a lot of user security traffic.

Another practical concern is service. Users generally like to have access to someone at their location (or at least in their time zone) to get help when problems come up. It would be difficult for many of these users to accept a system where they had to contact some central office (perhaps located on the other side of the world) to get routine account maintenance services. Therefore, local control over computer security will be around in many organizations for some time to come.

To solve this problem of very large networks, Microsoft chose another methodology. It has built its domain security systems to be able to trust the judgment of other domains when it comes to validation of user identity and group privileges. These trust relationships are relatively straightforward. You tell one domain that it should trust the security validations of another domain. Note that the trust relationship is a one-way street. If you tell domain A to trust domain B, it does not mean that domain B will trust domain A. If the controller in domain B validates the user though, domain A will accept that validation and provide access to any of the groups to which that user has been assigned.

Another key point in this architecture is that trust relationships need to be made by domain administrators in both domains. An administrator from domain A cannot just say that domain B is to be trusted. Domain B is not configured to provide validation information to domain A until the domain B administrator says that it is possible to let domain A trust domain B. Once both administrators agree to this one-way trust relationship, the validation information can be transmitted.
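The two-administrator handshake described above can be modeled in a short sketch. This is an assumption-laden illustration, not the real NT trust mechanism: the `Domain` class, its method names, and the domain names are all invented for the example. The point it demonstrates is that a trust becomes effective only when the trusted domain's administrator has permitted it, and that the resulting trust runs one way.

```python
# Illustrative model of setting up a one-way trust between two domains.
# Class and method names are invented for this sketch.

class Domain:
    def __init__(self, name):
        self.name = name
        self.permitted = set()   # domains OUR admin allows to trust us
        self.trusts = set()      # domains whose user validations we accept

    def permit_trusting(self, other):
        """Admin of the trusted domain allows `other` to trust it."""
        self.permitted.add(other.name)

    def add_trust(self, other):
        """Admin of the trusting domain completes the trust; it only
        works if the other domain's admin has already permitted it."""
        if self.name not in other.permitted:
            raise PermissionError(f"{other.name} has not permitted {self.name}")
        self.trusts.add(other.name)

a, b = Domain("A"), Domain("B")
b.permit_trusting(a)   # step 1: domain B's administrator agrees
a.add_trust(b)         # step 2: domain A's administrator adds the trust

print("B" in a.trusts, "A" in b.trusts)  # True False (the trust is one-way)
```

Had domain A's administrator called `add_trust` before domain B's administrator agreed, the sketch would raise an error, mirroring the requirement that both administrators participate.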

This could actually lead to some very big holes in security. Imagine, for example, that all domain administrators use the default roles for domain administrators and users. Would you really want someone from another domain that you trust coming in and acting as an administrator for your domain just because that person is an administrator for the other domain? Not really. Microsoft has implemented the concepts of local groups and global groups. All the groups that you find in your normal domain environment are local groups, which have meaning only within the domain. Feel free to create local groups to meet your needs; you do not have to coordinate your group names with everyone in the organization.

Instead, interdomain access rights are defined by membership in global groups. You create global groups the same way that you create local groups; however, you specify the global option. The group name is coordinated with the domains that trust you and that you trust. When you add a user to one of these global groups, that information is passed to the other domains when they do a security validation of that user. They can then assign to these global groups the access rights that are appropriate for visitors in their domains.

One final point related to domains: the trust relationships are point-to-point and not hierarchical. Figure 3.5 illustrates the difference between these two concepts. Suppose that domain A trusts domain B. You also know that domain B trusts domain C. This does not mean that domain A trusts domain C. You do not inherit trust relationships from domains that you trust.



Trust relationships are point-to-point, not hierarchical.

FIGURE 3.5. Trust relationships.
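The point-to-point rule in Figure 3.5 amounts to saying that trust is a set of explicit pairs with no transitive closure. A minimal sketch (the domain names are just placeholders):

```python
# Trusts are explicit pairs; nothing is inherited transitively.
trusts = {("A", "B"), ("B", "C")}   # A trusts B, and B trusts C

def is_trusted(trusting, trusted):
    """A validation from `trusted` is accepted by `trusting` only if
    that exact pair was explicitly configured."""
    return (trusting, trusted) in trusts

print(is_trusted("A", "B"))  # True
print(is_trusted("B", "C"))  # True
print(is_trusted("A", "C"))  # False: trust is point-to-point
```

If you want domain A to accept validations from domain C, the administrators of A and C have to set up that relationship explicitly; the A-to-B and B-to-C trusts do not combine.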

There is, of course, a lot more that could be said about the Windows NT security scheme. If you wish additional information, I suggest Windows NT 4.0 Server Unleashed from SAMS Publishing, the Microsoft Windows NT Resource Kit, or the Microsoft TechNet CD-ROM. The following is a summary of the security architecture in which Windows NT and the BackOffice components operate:


Integration of BackOffice with Windows NT Security


I have some good news for you. Once you learn to work with Windows NT security, you have almost all the knowledge that you need to work with the security systems of most BackOffice components. The key here is the capability of these applications (and others developed for the Windows NT network environment) to make calls to the operating system to ask for validation of a user ID and password combination. This is perhaps one of the most useful integration features between other BackOffice components and Windows NT.

The first advantage of using Windows NT security, as opposed to each BackOffice developer building its own, is that it provides better security. Not only do operating system developers have more time to work on a comprehensive security scheme than most application developers, they have the advantage of being able to access internal structures that the operating system protects from regular programs. As mentioned earlier, Windows NT security also has the advantage that administrators have to learn only one interface to the security system, which reduces the burden of maintaining user accounts. In this environment, access to a database or electronic mail system is just like access to any other server resource.

A key advantage for users is that they get access to system resources via a single login. Users get quite annoyed when they have a large number of different accounts—one for each server, another for each database, and so forth. If the user IDs and passwords are not coordinated with one another, the users usually wind up forgetting their passwords for one or more systems; then the administrator has to intervene to reset the passwords. A single login also reduces the risk that they will write down all their accounts and passwords on a sticky note posted on their computers (some people will still write their single user ID and password on such notes, but there is nothing we can do about them).

The best news about this integration is that it is almost transparent to you. For example, when you add a new user to Microsoft Exchange Server, the only way that you notice it is that you get a really detailed property page after you create the user account. It can be somewhat confusing the first time that you see it (especially if you have just performed an operating system upgrade), because all it says is Server in its title bar. It does not say NT Server, Exchange Server, or SQL Server. As it turns out, it is actually the detailed information (office address, telephone numbers, and all those e-mail preferences, such as whether a return receipt is desired) that is associated with an Exchange Server account. Once you have that figured out, you fill out the details desired in your organization and click on the correct button, and the account is created in both NT Server and Exchange Server.

Using the Security Environment to Your Advantage


No, I am not going to disclose a bunch of secrets gleaned from hacker bulletin boards that will let you access information that you should not have access to. Instead, let's look at how you, as an administrator or developer, can set things up so that you derive productivity benefits from the operating system's security system. It is a powerful system with defined interfaces. If you choose to take advantage of this powerful environment, you can make your job much easier.

The first suggestion that I have for organizations that write their own applications is to use the Windows NT security system if you have to do any access validation at all. The API for this interface is well defined. If the user is on a local computer or is logged in to a domain, you can use the login ID and then map application-specific privileges to this login ID rather than building your own login system. You can also interface special data collection mechanisms to the basic User Manager interface in much the same manner that Microsoft does for Exchange Server. As an administrator, you do not want to run a separate package to set people up for a locally written application in addition to the setup that you go through for operating system or domain accounts.
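The "map application privileges to the login ID" idea can be sketched briefly. This is a hypothetical illustration: the `APP_ROLES` table, the privilege names, and the hard-coded user are all invented, and a real NT application would obtain the already-authenticated login ID from the operating system rather than hard-coding it.

```python
# Hedged sketch: instead of building its own login system, an application
# maps the OS-validated login ID to application-specific privileges.
# The table contents and privilege names are invented for illustration.

APP_ROLES = {
    "tom":   {"enter_orders", "run_reports"},
    "alice": {"run_reports"},
}

def can(login_id, privilege):
    """Check an application privilege for an already-validated login ID."""
    return privilege in APP_ROLES.get(login_id, set())

# In a real NT program, the login ID comes from the operating system,
# which has already authenticated the user; here we just hard-code it.
current_user = "tom"
print(can(current_user, "enter_orders"))  # True
print(can("alice", "enter_orders"))       # False
```

The application never handles passwords at all; it trusts the operating system's validation and concerns itself only with what each validated user may do.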

Spend a little time doing some group planning before you set up your systems. Once you have a large number of users in your system, it can be quite time-consuming if you have to change your groups and then grant all the new privileges. Chapter 10, "Setting Up Windows NT," presents some of the basic planning processes for a server. You can actually build schemes with a hierarchy of privileges that can simplify your later administrative tasks. See Figure 3.6, for example. Here, you have a base set of privileges that you give to all users (such as access to the shared printers and the company-wide information directories). You then add specific groups to capture people with similar privileges. For example, all accounting clerks have a certain set of accesses to update the books, but the accounting manager has a special monthly report directory. Only the manager can write to it, but the rest of the department can read it. If you spend a little time talking to your users and thinking about your organizational structure, you can probably come up with a pretty good scheme to match your needs.

FIGURE 3.6. Hierarchy of group privileges.
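The hierarchy in Figure 3.6 boils down to this: a user's effective privileges are the union of the grants made to every group the user belongs to. A minimal sketch, with group names, privilege names, and users all invented for the example:

```python
# Sketch of a privilege hierarchy: a base group for everyone, plus more
# specific groups layered on top. All names here are invented examples.

GROUP_PRIVS = {
    "everyone":      {"use_shared_printers", "read_company_dirs"},
    "accounting":    {"update_books", "read_monthly_reports"},
    "acct_managers": {"write_monthly_reports"},
}

MEMBERSHIP = {
    "clerk1":  ["everyone", "accounting"],
    "manager": ["everyone", "accounting", "acct_managers"],
}

def effective_privileges(user):
    """Union of the privileges of every group the user belongs to."""
    privs = set()
    for group in MEMBERSHIP.get(user, []):
        privs |= GROUP_PRIVS[group]
    return privs

print("write_monthly_reports" in effective_privileges("manager"))  # True
print("write_monthly_reports" in effective_privileges("clerk1"))   # False
print("read_monthly_reports" in effective_privileges("clerk1"))    # True
```

Note how the monthly report scenario from the text falls out naturally: everyone in accounting can read the report directory, but only members of the managers' group can write to it.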

Especially recommended for application developers (and system administrators who have input into the application development process) is the application log, an interface of Event Viewer to which applications can write. (Event Viewer is covered in the next chapter as one of the administrative tools of interest to system administrators.)

The only time that I have seen the application log used is when a BackOffice product writes to it. I have not loaded any non-Microsoft commercial applications that have been set up for this function, but I am sure that they are out there. I have seen locally written applications use this log and it works quite well. The basic concept is easy and it took the group of programmers that I work with less than an hour to figure out how to use this interface. Best of all, once your applications learn to write to this tool, you no longer have to worry about building your own message log viewer or log file cleanup routines. Microsoft provides them for you with Event Viewer.

What kind of messages can you write to Event Viewer? Generally, short text is used as opposed to long dissertations. However, I have seen some fairly complicated messages that dump the status of internal application variables and other such data. The key is that you can record what your application needs. There are several different types of events: informational, warning, and error. You can record as little or as much as you want. The only thing that you have to keep in mind when designing your messages is that you might have to read through this log some day when a problem comes up. You do not want to have to wade through a lot of garbage to get to the important stuff.
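The message-design advice above is easy to model. This sketch is not the Win32 event log API; it is a plain-Python analogy (the record fields, source name, and messages are invented) showing why short, typed messages pay off when you come back to the log during a problem.

```python
# Analogy only, not the Win32 event log interface. The event record fields
# mirror the idea of Event Viewer's severity types.

from collections import namedtuple
from datetime import datetime

Event = namedtuple("Event", "time source severity message")
log = []

def report(source, severity, message):
    """Append a short, typed message to the application log."""
    assert severity in ("information", "warning", "error")
    log.append(Event(datetime.now(), source, severity, message))

report("PayrollApp", "information", "nightly run started")
report("PayrollApp", "warning", "3 records skipped: missing cost center")
report("PayrollApp", "error", "could not open GL export file")

# When a problem comes up, filter straight to what matters:
problems = [e for e in log if e.severity != "information"]
print(len(problems))  # 2
```

Because each entry carries a severity type, the "wade through the garbage" step becomes a one-line filter instead of a manual read through the whole log.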

Summary


The goal of this chapter was to give you a general understanding of the security environment in which your BackOffice applications will be hosted. You should also be comfortable with the concepts of workgroups and domains. Finally, you should have an appreciation of how tightly BackOffice security is coupled with the Windows NT Server security environment.

You may feel a little empty at this point when it comes to security. Although this chapter discusses security system concepts, it does not present anything practical about how to perform even the simple task of creating a user account. Don’t worry, that is what the next couple of chapters are about. With the basics under your belt, you are ready to plunge into the more practical subjects of tools and techniques. Chapter 4, "Monitoring Environment," covers the monitoring environment that you will use to keep track of Windows NT and other BackOffice components from both a performance and security auditing point of view. Chapter 5, "Administrative Environment," presents the administrative tools that you will be using to set up accounts and perform all the other functions that are needed to keep a server functioning happily. Remember, when this book discusses these Windows NT Server tools, it is talking about the tools you will be using for other BackOffice components as well. Many of the BackOffice components have their own little control applications, but they use the Windows NT tools for most common administrative functions. I have only run the Exchange Server administrative application a handful of times and that was mostly to figure out whether there were advanced options that I could play with to improve performance.
